US Government Agency
OpenAI debuts a version of ChatGPT for US government agencies
OpenAI has begun offering a version of ChatGPT designed for US government agencies. ChatGPT Gov includes many of the same features found in the Enterprise offering of the chatbot, including access to the company's GPT-4o model. "By making our products available to the US government, we aim to ensure AI serves the national interest and the public good, aligned with democratic values, while empowering policymakers to responsibly integrate these capabilities to deliver better services to the American people," OpenAI said in a blog post published Tuesday. Before today, US government employees were already using ChatGPT in their day-to-day work. According to the company, federal, state and local government workers at 3,500 agencies across the country have sent more than 18 million messages since 2024.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.96)
OpenAI debuts ChatGPT Gov, a new version of the chatbot for US government agencies
OpenAI has announced a new "ChatGPT Gov" product that the company says will give U.S. government agencies an additional way to access its frontier large language models (LLMs) while maintaining internal security standards. "We believe the U.S. government's adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America's global leadership in this technology. This includes making our models available to support public sector work that benefits society – such as public health, energy and the environment, transportation and infrastructure, consumer protection, and national security," OpenAI wrote in a Tuesday press release. The company believes that partnering with the U.S. government is key to ensuring that rapidly developing AI capabilities are well understood by policymakers and responsibly integrated to deliver services to American citizens.
- Asia > China (0.26)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.05)
- North America > United States > Minnesota (0.05)
- (3 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The US Patent and Trademark Office Banned Staff From Using Generative AI
The US Patent and Trademark Office banned the use of generative artificial intelligence for any purpose last year, citing security concerns with the technology as well as the propensity of some tools to exhibit "bias, unpredictability, and malicious behavior," according to an April 2023 internal guidance memo obtained by WIRED through a public records request. Jamie Holcombe, the chief information officer of the USPTO, wrote that the office is "committed to pursuing innovation within our agency" but is still "working to bring these capabilities to the office in a responsible way." Paul Fucito, press secretary for the USPTO, clarified to WIRED that employees can use "state-of-the-art generative AI models" at work, but only inside the agency's internal testing environment. "Innovators from across the USPTO are now using the AI Lab to better understand generative AI's capabilities and limitations and to prototype AI-powered solutions to critical business needs," Fucito wrote in an email. Outside of the testing environment, USPTO staff are barred from relying on AI programs like OpenAI's ChatGPT or Anthropic's Claude for work tasks.
Tech expert warns 2024 will see 'explosion of AI-powered cybercrime', and 27 US government agencies are currently using these systems in place of humans
A tech expert has warned that new advances in AI-powered technology will lead to an 'explosion' in cybercrime in 2024. Shawn Henry, the chief security officer for CrowdStrike, recently shared how cybercriminals can use AI to sneak through individuals' cybersecurity defenses, spread misinformation, or infiltrate corporate networks. Cybercriminals can use AI to mislead people into believing false narratives during the election season and potentially giving up sensitive information, said the retired executive assistant director of the Federal Bureau of Investigation (FBI). The cybersecurity veteran's warning comes as AI has been given more jobs than ever, including in the US federal and state governments. Twenty-seven departments of the US federal government have deployed AI in some way, and many states have, too.
- North America > United States > Utah (0.07)
- North America > United States > Texas (0.07)
- North America > United States > Ohio (0.07)
The State of AI Ethics Report (Volume 4)
Abhishek Gupta, Alexandrine Royer, Connor Wright, Victoria Heath, Muriam Fancy, Marianna Bergamaschi Ganapini, Shannon Egan, Masa Sweidan, Mo Akif, Renjie Butalid
The 4th edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled "AI and the Face: A Historian's View," in which Higgs examines the unscientific history of facial analysis and how AI might be repeating some of those mistakes at scale. The report also features chapter introductions by Alexa Hagerty (Anthropologist, University of Cambridge), Marianna Ganapini (Faculty Director, Montreal AI Ethics Institute), Deborah G. Johnson (Emeritus Professor, Engineering and Society, University of Virginia), and Soraj Hongladarom (Professor of Philosophy and Director, Center for Science, Technology and Society, Chulalongkorn University in Bangkok). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
- North America > United States (1.00)
- Asia (1.00)
- Africa (1.00)
- (2 more...)
- Summary/Review (1.00)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- (4 more...)
- Social Sector (1.00)
- Media > News (1.00)
- Leisure & Entertainment > Sports (1.00)
- (17 more...)
Thousands of US government agencies are using Clearview AI without approval
Nearly two thousand government bodies, including police departments and public schools, have been using Clearview AI without oversight. BuzzFeed News reports that employees from 1,803 public bodies used the controversial facial-recognition platform without authorization from their superiors. Reporters contacted a number of agency heads, many of whom said they were unaware their employees were accessing the system. A database of searches, outlining which agencies were able to access the platform and how many queries were made, was leaked to BuzzFeed News by an anonymous source. It has published a version of the database online, enabling you to examine how many times each department has used the tool.
Making Responsible AI the Norm rather than the Exception
This report prepared by the Montreal AI Ethics Institute provides recommendations in response to the National Security Commission on Artificial Intelligence (NSCAI) Key Considerations for Responsible Development and Fielding of Artificial Intelligence document. The report centres on the idea that Responsible AI should be made the Norm rather than an Exception. It does so by utilizing the guiding principles of: (1) alleviating friction in existing workflows, (2) empowering stakeholders to get buy-in, and (3) conducting an effective translation of abstract standards into actionable engineering practices. After providing some overarching comments on the document from the NSCAI, the report dives into the primary contribution of an actionable framework to help operationalize the ideas presented in the document from the NSCAI. The framework consists of: (1) a learning, knowledge, and information exchange (LKIE), (2) the Three Ways of Responsible AI, (3) an empirically-driven risk-prioritization matrix, and (4) achieving the right level of complexity. All components reinforce each other to move from principles to practice in service of making Responsible AI the norm rather than the exception.
- North America > United States (0.71)
- North America > Canada > Quebec > Montreal (0.27)
- Europe > France (0.04)
- Law (0.93)
- Government > Military (0.48)
- Government > Regional Government > North America Government > United States Government (0.31)
Cerego Launches Cerego Insights and Skill for Amazon Alexa
SAN FRANCISCO, June 21, 2018 /PRNewswire-PRWeb/ -- Cerego, an AI-driven platform that optimizes how people learn, today announced the launch of Cerego Insights, as well as a Cerego skill for Amazon Alexa, onstage at the Amazon Web Services (AWS) Public Sector Summit 2018. Cerego Insights uses machine learning models to objectively understand learners' cognitive and behavioral strengths, and predict future performance. The new Cerego Insights offering goes beyond the company's core competency of knowledge acquisition and memory management. Instructors and managers can now instantly access each individual and group's cognitive and behavioral profile. These attributes include specific, validated scores for Agility, Diligence, and Knowledge.
- North America > United States > California > San Francisco County > San Francisco (0.26)
- North America > United States > Arizona (0.06)
- North America > United States > New York (0.05)
- Government (0.73)
- Information Technology (0.71)
- Education > Educational Setting (0.51)